AAAI.2022 - Intelligent Robotics

Total: 8

#1 Discovering State and Action Abstractions for Generalized Task and Motion Planning

Authors: Aidan Curtis ; Tom Silver ; Joshua B. Tenenbaum ; Tomás Lozano-Pérez ; Leslie Kaelbling

Generalized planning accelerates classical planning by finding an algorithm-like policy that solves multiple instances of a task. A generalized plan can be learned from a few training examples and applied to an entire domain of problems. Generalized planning approaches perform well in discrete AI planning problems that involve large numbers of objects and extended action sequences to achieve the goal. In this paper, we propose an algorithm for learning features, abstractions, and generalized plans for continuous robotic task and motion planning (TAMP) and examine the unique difficulties that arise when forced to consider geometric and physical constraints as a part of the generalized plan. Additionally, we show that these simple generalized plans learned from only a handful of examples can be used to improve the search efficiency of TAMP solvers.
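To make the idea of an algorithm-like generalized plan concrete, here is a minimal Python sketch of a loop-like policy applied to task instances of different sizes. The block-clearing domain, the state encoding, and the features and motion_feasible helpers are hypothetical stand-ins for illustration only, not the abstractions learned by the paper's method.

```python
# Minimal sketch of an "algorithm-like" generalized plan applied to problem
# instances of varying size. The block-clearing domain and helpers below are
# hypothetical stand-ins, not the paper's learned abstractions.

def features(state, goal):
    """Abstract features: how many blocks sit above the goal block."""
    return {"blocks_above_goal": len(state["above"][goal])}

def motion_feasible(state, action):
    # Placeholder for a motion planner / collision checker (the geometric
    # constraints that make generalized TAMP harder than discrete planning).
    return True

def generalized_plan(state, goal):
    """Loop-like policy: keep unstacking until the goal block is clear.
    The same plan works for instances with any number of blocks."""
    plan = []
    while features(state, goal)["blocks_above_goal"] > 0:
        top = state["above"][goal][-1]          # topmost obstructing block
        action = ("unstack", top)
        if not motion_feasible(state, action):  # geometric feasibility check
            raise RuntimeError("abstract action has no feasible motion")
        plan.append(action)
        state["above"][goal].pop()              # apply the abstract effect
    plan.append(("pick", goal))
    return plan

# Two instances of different size, solved by the same generalized plan.
small = {"above": {"A": ["B"]}}
large = {"above": {"A": ["B", "C", "D", "E"]}}
print(generalized_plan(small, "A"))
print(generalized_plan(large, "A"))
```

The same plan solves both instances because it branches on abstract features (how many blocks obstruct the goal) rather than on a fixed instance size.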

#2 Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems

Authors: Fangda Gu ; He Yin ; Laurent El Ghaoui ; Murat Arcak ; Peter Seiler ; Ming Jin

Neural network controllers have become popular in control tasks thanks to their flexibility and expressivity. Stability is a crucial property for safety-critical dynamical systems, while stabilization of partially observed systems in many cases requires controllers to retain and process long-term memories of the past. We consider the important class of recurrent neural networks (RNNs) as dynamic controllers for nonlinear, uncertain, partially observed systems, and derive convex stability conditions based on integral quadratic constraints, the S-lemma, and sequential convexification. To ensure stability during the learning and control process, we propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space, taking advantage of mild additional information on system dynamics. Numerical experiments show that our method learns stabilizing controllers with fewer samples and achieves higher final performance compared with standard policy gradient.
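As a rough illustration of the projected policy gradient idea, the sketch below alternates a gradient step on the controller parameters with a projection back onto a convex set. The norm-ball projection and the toy gradient are placeholders, not the IQC/S-lemma stability set derived in the paper.

```python
# Minimal sketch of a projected policy gradient step: take a gradient step on
# the (flattened) controller parameters, then project back onto a convex
# "stability" set. The box/norm constraint here is a toy stand-in for the
# paper's convex stability conditions.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=8)             # flattened RNN controller parameters

def policy_gradient(theta):
    # Placeholder for a gradient estimate obtained from rollouts.
    return -theta                       # pretend gradient of a quadratic return

def project_to_stable_set(theta, radius=1.0):
    # Toy projection onto a norm ball; the real method projects onto the
    # convex set defined by the derived stability conditions.
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

lr = 0.1
for step in range(50):
    theta = theta + lr * policy_gradient(theta)   # ascent on the return
    theta = project_to_stable_set(theta)          # enforce stability each step
print("final parameter norm:", np.linalg.norm(theta))
```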

#3 Random Mapping Method for Large-Scale Terrain Modeling

Authors: Xu Liu ; Decai Li ; Yuqing He

The vast amount of data captured by robots in large-scale environments creates computing and storage bottlenecks for typical methods of modeling the spaces the robots travel in. To efficiently construct a compact terrain model from uncertain, incomplete point cloud data of large-scale environments, we first propose a novel feature mapping method, named random mapping, based on the fast random construction of basis functions; it efficiently projects the scattered points in the low-dimensional space into a high-dimensional space where the points are approximately linearly distributed. Then, in this mapped space, we learn a continuous linear regression model to represent the terrain. We show that this method can model the environments with much less computation time, memory consumption, and access time, while retaining high accuracy. Furthermore, the models generalize comparably to their performance on the training set, and their inference accuracy gradually increases as the random mapping dimension increases. To better solve the large-scale environmental modeling problem, we adopt the idea of parallel computing to train the models; this strategy greatly reduces the wall-clock time of computation without losing much accuracy. Experiments show the effectiveness of the random mapping method and the effects of some important parameters on its performance. Moreover, we evaluate the proposed terrain modeling method built on random mapping and compare its performance with popular and state-of-the-art methods.
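The random mapping idea is closely related to random feature methods, so a small sketch is easy to give: draw a random basis once, map 2-D terrain coordinates through it, and fit a linear (ridge) regressor in the mapped space. The cosine basis, the mapping dimension, and the synthetic terrain below are illustrative choices, not the paper's exact construction.

```python
# Minimal sketch of the random-mapping idea: project 2-D terrain coordinates
# through randomly constructed basis functions, then fit a linear (ridge)
# regressor in the mapped space.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "terrain": height as a smooth function of (x, y), plus noise.
X = rng.uniform(-5, 5, size=(2000, 2))
y = np.sin(X[:, 0]) * np.cos(0.5 * X[:, 1]) + 0.05 * rng.normal(size=2000)

# Random feature map z(x) = cos(W x + b), with W and b drawn once at random.
D = 300                                   # random mapping dimension
W = rng.normal(scale=1.0, size=(2, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def random_map(X):
    return np.cos(X @ W + b)

# Linear (ridge) regression in the mapped space, solved in closed form.
Z = random_map(X)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(D), Z.T @ y)

# Query the compact terrain model at new locations.
Xq = rng.uniform(-5, 5, size=(500, 2))
pred = random_map(Xq) @ w
true = np.sin(Xq[:, 0]) * np.cos(0.5 * Xq[:, 1])
print("test RMSE:", np.sqrt(np.mean((pred - true) ** 2)))
```

Because W and b are fixed after being drawn, only the linear weights w need to be stored and updated, which is what keeps the model compact and fast to query.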

#4 Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning

Authors: Yecheng Jason Ma ; Andrew Shen ; Osbert Bastani ; Dinesh Jayaraman

Reinforcement Learning (RL) agents in the real world must satisfy safety constraints in addition to maximizing a reward objective. Model-based RL algorithms hold promise for reducing unsafe real-world actions: they may synthesize policies that obey all constraints using simulated samples from a learned model. However, imperfect models can result in real-world constraint violations even for actions that are predicted to satisfy all constraints. We propose Conservative and Adaptive Penalty (CAP), a model-based safe RL framework that accounts for potential modeling errors by capturing model uncertainty and adaptively exploiting it to balance the reward and the cost objectives. First, CAP inflates predicted costs using an uncertainty-based penalty. Theoretically, we show that policies satisfying this conservative cost constraint are guaranteed to also be feasible in the true environment, which in turn guarantees the safety of all intermediate solutions during RL training. Further, CAP adaptively tunes this penalty during training using true cost feedback from the environment. We evaluate this conservative and adaptive penalty-based approach for model-based safe RL extensively on state-based and image-based environments. Our results demonstrate substantial gains in sample efficiency while incurring fewer violations than prior safe RL algorithms. Code is available at: https://github.com/Redrew/CAP
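A minimal sketch of the conservative-and-adaptive penalty loop might look as follows. The synthetic cost and uncertainty values stand in for a learned dynamics model and its ensemble disagreement, and the additive update for kappa is only one plausible adaptation rule, not necessarily the one used by CAP.

```python
# Minimal sketch in the spirit of CAP: inflate the model-predicted cost by an
# uncertainty term scaled by kappa, and adapt kappa from true cost feedback.
import numpy as np

rng = np.random.default_rng(0)
cost_limit = 25.0
kappa = 1.0                      # penalty weight, adapted during training
eta = 0.05                       # adaptation step size

for episode in range(200):
    predicted_cost = rng.uniform(15, 30)          # model-rollout cost estimate
    uncertainty = rng.uniform(0, 5)               # e.g. ensemble disagreement
    conservative_cost = predicted_cost + kappa * uncertainty

    # A planner or policy update would only accept candidate plans with
    # conservative_cost <= cost_limit (omitted in this sketch).

    true_cost = predicted_cost + rng.normal(0, 2)  # feedback from environment
    # Increase kappa when the environment reports violations, decrease otherwise.
    kappa = max(0.0, kappa + eta * (true_cost - cost_limit))

print("adapted kappa:", round(kappa, 3))
```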

#5 CTIN: Robust Contextual Transformer Network for Inertial Navigation

Authors: Bingbing Rao ; Ehsan Kazemi ; Yifan Ding ; Devu M Shila ; Frank M Tucker ; Liqiang Wang

Recently, data-driven inertial navigation approaches have demonstrated their capability of using well-trained neural networks to obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder enhanced by local and global multi-head self-attention to capture spatial contextual information from IMU measurements. We then fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and the prediction accuracy of velocity and trajectory. Through extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own), we show that CTIN is very robust and outperforms state-of-the-art models.
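For readers who want to see the overall shape of such a model, below is a tiny PyTorch sketch with a convolutional encoder, a self-attention block, a Transformer decoder, and a velocity regression head. The layer sizes and block counts are illustrative and much smaller than a real CTIN; the TinyCTIN name and all hyperparameters are assumptions.

```python
# Tiny sketch of the CTIN-like pipeline: conv encoder over an IMU window,
# self-attention for spatial context, Transformer decoder for temporal fusion,
# and a per-step velocity head. Illustrative sizes only.
import torch
import torch.nn as nn

class TinyCTIN(nn.Module):
    def __init__(self, d_model=64, imu_dim=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(imu_dim, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
        )
        self.self_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        decoder_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
        self.velocity_head = nn.Linear(d_model, 2)   # 2-D velocity per time step

    def forward(self, imu):                          # imu: (batch, time, imu_dim)
        feat = self.encoder(imu.transpose(1, 2)).transpose(1, 2)
        ctx, _ = self.self_attn(feat, feat, feat)    # spatial context
        fused = self.decoder(tgt=ctx, memory=feat)   # temporal fusion
        return self.velocity_head(fused)

model = TinyCTIN()
imu_window = torch.randn(8, 200, 6)                 # 8 windows of 200 IMU samples
print(model(imu_window).shape)                       # torch.Size([8, 200, 2])
```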

#6 Monocular Camera-Based Point-Goal Navigation by Learning Depth Channel and Cross-Modality Pyramid Fusion

Authors: Tianqi Tang ; Heming Du ; Xin Yu ; Yi Yang

For a monocular camera-based navigation system, effectively exploiting scene geometric cues from RGB images can significantly improve the efficiency of navigation. Motivated by this, we propose a highly efficient point-goal navigation framework, dubbed Geo-Nav. In a nutshell, Geo-Nav consists of two parts: a visual perception part and a navigation part. In the visual perception part, we first propose a Self-supervised Depth Estimation network (SDE) specially tailored for the monocular camera-based navigation agent. Our SDE learns a mapping from an RGB input image to its corresponding depth image by exploiting scene geometric constraints in a self-consistent manner. Then, to obtain a representative visual representation from the RGB inputs and learned depth images, we propose a Cross-modality Pyramid Fusion module (CPF). Concretely, our CPF computes a patch-wise cross-modality correlation between different modal features and exploits the correlation to fuse and enhance features at each scale. Thanks to the patch-wise nature of our CPF, we can fuse feature maps at high resolution, allowing our visual network to perceive more image details. In the navigation part, the extracted visual representations are fed to a navigation policy network that learns to map them to agent actions effectively. Extensive experiments on the widely used multi-room environment Gibson demonstrate that Geo-Nav outperforms the state-of-the-art in terms of efficiency and effectiveness.
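The patch-wise cross-modality correlation can be sketched compactly: unfold RGB and depth feature maps into patches, compute a per-patch similarity, and use it to gate the depth features before fusing them into the RGB branch. The patch size, the cosine similarity, and the additive fusion below are assumptions for illustration, not the exact CPF design.

```python
# Minimal sketch of a patch-wise cross-modality fusion: correlate RGB and depth
# feature patches and use the correlation to reweight the depth features.
import torch
import torch.nn.functional as F

def patch_fusion(rgb_feat, depth_feat, patch=4):
    # rgb_feat, depth_feat: (batch, channels, H, W) feature maps at one scale.
    b, c, h, w = rgb_feat.shape
    # Unfold into non-overlapping patches: (batch, patches, c * patch * patch).
    rgb_p = F.unfold(rgb_feat, kernel_size=patch, stride=patch).transpose(1, 2)
    dep_p = F.unfold(depth_feat, kernel_size=patch, stride=patch).transpose(1, 2)
    # Patch-wise cross-modality correlation (cosine similarity per patch).
    corr = F.cosine_similarity(rgb_p, dep_p, dim=-1)          # (batch, patches)
    # Gate the depth patches with the correlation, then fold back to a map.
    gated = dep_p * corr.unsqueeze(-1)
    gated = F.fold(gated.transpose(1, 2), output_size=(h, w),
                   kernel_size=patch, stride=patch)
    return rgb_feat + gated                                    # fused features

rgb_feat = torch.randn(2, 32, 64, 64)
depth_feat = torch.randn(2, 32, 64, 64)
print(patch_fusion(rgb_feat, depth_feat).shape)   # torch.Size([2, 32, 64, 64])
```

Because the gating is computed per patch rather than per image, the same routine can be applied at every level of a feature pyramid, including high-resolution maps.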

#7 Robust Adversarial Reinforcement Learning with Dissipation Inequation Constraint

Authors: Peng Zhai ; Jie Luo ; Zhiyan Dong ; Lihua Zhang ; Shunli Wang ; Dingkang Yang

Robust adversarial reinforcement learning is an effective method for training agents to handle uncertain disturbances and modeling errors in real environments. However, for systems that are sensitive to disturbances or difficult to stabilize, it is easier to learn a powerful adversary than to establish a stable control policy. An improperly strong adversary can destabilize the system, introduce biases into the sampling process, make the learning process unstable, and even reduce the robustness of the policy. In this study, we consider the problem of ensuring system stability during training in the adversarial reinforcement learning architecture. The dissipative principle of robust H-infinity control is extended to the Markov decision process, and robust stability constraints are obtained based on the L2-gain performance of the reinforcement learning system. We thus propose a dissipation-inequation-constraint-based adversarial reinforcement learning architecture, which ensures the stability of the system during training by imposing constraints on both the normal and adversarial agents. Theoretically, this architecture can be applied to a large family of deep reinforcement learning algorithms. Experiments in the MuJoCo and GymFc environments show that our architecture effectively improves the robustness of the controller against environmental changes and adapts to more powerful adversaries. Flight experiments on a real quadcopter indicate that the policy trained in simulation can be deployed directly in the real environment, and our controller outperforms the PID controller in hardware-in-the-loop tests. Both our theoretical and empirical results provide new and critical perspectives on the adversarial reinforcement learning architecture from a rigorous robust control standpoint.
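The L2-gain condition behind the dissipation constraint is simple to state in code: along a trajectory, the accumulated output energy should be bounded by gamma squared times the accumulated disturbance energy. The sketch below only checks this condition on a synthetic trajectory; how the constraint is imposed on the normal and adversarial agents during training is the paper's contribution and is not reproduced here.

```python
# Minimal sketch of an L2-gain check: sum ||z_t||^2 <= gamma^2 * sum ||w_t||^2,
# where w_t is the adversary's disturbance and z_t the regulated output.
# The trajectory below is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.0
T = 500
disturbance = 0.3 * rng.normal(size=(T, 2))   # adversary's injected input w_t
output = 0.4 * rng.normal(size=(T, 2))        # regulated output z_t (e.g. tracking error)

output_energy = np.sum(output ** 2)
disturbance_energy = np.sum(disturbance ** 2)
l2_gain_satisfied = output_energy <= gamma ** 2 * disturbance_energy

print("empirical gain:", np.sqrt(output_energy / disturbance_energy))
print("L2-gain constraint satisfied:", bool(l2_gain_satisfied))
```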

#8 Sim2Real Object-Centric Keypoint Detection and Description

Authors: Chengliang Zhong ; Chao Yang ; Fuchun Sun ; Jinshan Qi ; Xiaodong Mu ; Huaping Liu ; Wenbing Huang

Keypoint detection and description play a central role in computer vision. Most existing methods make scene-level predictions and do not return the object classes of individual keypoints. In this paper, we propose an object-centric formulation, which, beyond the conventional setting, further requires identifying which object each interest point belongs to. With such fine-grained information, our framework enables more downstream applications, such as object-level matching and pose estimation in a cluttered environment. To get around the difficulty of label collection in the real world, we develop a sim2real contrastive learning mechanism that generalizes a model trained in simulation to real-world applications. The novelties of our training method are threefold: (i) we integrate uncertainty into the learning framework to improve feature description of hard cases, e.g., less-textured or symmetric patches; (ii) we decouple the object descriptor into two independent branches, intra-object salience and inter-object distinctness, resulting in a better pixel-wise description; (iii) we enforce cross-view semantic consistency for enhanced robustness in representation learning. Comprehensive experiments on image matching and 6D pose estimation verify the encouraging generalization ability of our method. In particular, for 6D pose estimation our method significantly outperforms typical unsupervised/sim2real methods, substantially narrowing the gap to the fully supervised counterpart.
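Sim2real contrastive descriptor learning typically builds on an InfoNCE-style objective: descriptors of the same physical point seen in two views are treated as positives and all other pairs as negatives. The sketch below shows such a loss on placeholder descriptors; the uncertainty weighting and the decoupled salience/distinctness branches described above are omitted and the exact loss used by the paper may differ.

```python
# Minimal sketch of an InfoNCE-style contrastive loss for keypoint descriptors:
# matched descriptors from two views sit on the diagonal of the similarity
# matrix and are treated as positives; everything else is a negative.
import torch
import torch.nn.functional as F

def info_nce(desc_a, desc_b, temperature=0.1):
    # desc_a, desc_b: (N, D) descriptors of N matched keypoints in two views.
    desc_a = F.normalize(desc_a, dim=1)
    desc_b = F.normalize(desc_b, dim=1)
    logits = desc_a @ desc_b.t() / temperature     # (N, N) similarity matrix
    targets = torch.arange(desc_a.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, targets)

desc_view1 = torch.randn(128, 64, requires_grad=True)   # placeholder network outputs
desc_view2 = torch.randn(128, 64)
loss = info_nce(desc_view1, desc_view2)
loss.backward()
print("contrastive loss:", float(loss))
```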